    The unicity of types for depth-zero supercuspidal representations

    We establish the unicity of types for depth-zero supercuspidal representations of an arbitrary p-adic group G, showing that each depth-zero supercuspidal representation of G contains a unique conjugacy class of typical representations of maximal compact subgroups of G. As a corollary, we obtain an inertial Langlands correspondence for these representations, via the Langlands correspondence of DeBacker and Reeder.
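    In symbols, the unicity statement has roughly the following shape; this is a hedged paraphrase of the abstract, with pi, K, and tau as illustrative notation rather than the paper's own formulation:

        % Paraphrase of the abstract's main statement; notation is illustrative.
        % \pi     : a depth-zero supercuspidal representation of a p-adic group G
        % (K,\tau): a maximal compact subgroup K of G together with a typical
        %           representation \tau of K
        \[
          \pi \ \text{depth-zero supercuspidal} \implies
          \text{there is a unique } G\text{-conjugacy class of pairs } (K,\tau)
          \text{ with } \tau \text{ typical and } \operatorname{Hom}_K(\tau,\pi|_K) \neq 0.
        \]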

    How biased are maximum entropy models?

    Maximum entropy models have become popular statistical models in neuroscience and other areas of biology, and can be useful tools for obtaining estimates of mutual information in biological systems. However, maximum entropy models fit to small data sets can be subject to sampling bias; i.e., the true entropy of the data can be severely underestimated. Here we study the sampling properties of estimates of the entropy obtained from maximum entropy models. We show that if the data are generated by a distribution that lies in the model class, the bias is equal to the number of parameters divided by twice the number of observations. However, in practice, the true distribution is usually outside the model class, and we show here that this misspecification can lead to much larger bias. We provide a perturbative approximation of the maximal expected bias when the true distribution lies outside the model class, and we illustrate our results using numerical simulations of an Ising model, i.e., the second-order maximum entropy distribution on binary data.
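    As a minimal numerical sketch of the in-class result, consider the first-order (independent) maximum entropy model on binary data, where the maximum likelihood fit is analytic; the K/(2N) figure is the abstract's claim, and all names and sizes below are illustrative assumptions, not the paper's code:

        import numpy as np

        rng = np.random.default_rng(0)

        def bernoulli_entropy(p):
            # Entropy (in nats) of independent Bernoulli units with means p.
            p = np.clip(p, 1e-12, 1 - 1e-12)
            return float(np.sum(-p * np.log(p) - (1 - p) * np.log(1 - p)))

        n_units = 10          # the independent model has K = n_units parameters
        p_true = np.full(n_units, 0.3)
        H_true = bernoulli_entropy(p_true)

        N = 200               # number of observations per fit
        n_trials = 2000
        biases = []
        for _ in range(n_trials):
            data = rng.random((N, n_units)) < p_true       # samples from the true model
            H_hat = bernoulli_entropy(data.mean(axis=0))   # plug-in max-ent entropy
            biases.append(H_true - H_hat)

        print(f"mean bias      : {np.mean(biases):.4f} nats")
        print(f"K/(2N) theory  : {n_units / (2 * N):.4f} nats")

    Because the true distribution lies in the model class here, the simulated bias should land near K/(2N) = 10/400 = 0.025 nats; misspecified data would inflate it, as the abstract describes.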

    The Two Hundred Mile Economic Zone and Scientific Research

    Knowledge of the deep ocean floor, continental shelves, and margins is basic to many present and future uses of the sea. Such uses include mineral extraction, navigation of surface and subsurface vehicles, construction of structures on the sea bottom and along the margin, and exploitation of living resources of the shelf and of the water column. Uses of the ocean are directly impinged upon by technological changes resulting from research efforts. Applied science is nowhere more visible than in the current development of offshore petroleum sources, the development of new fisheries, and speculation over possibilities for wealth awaiting mankind on the sea floor. Trained scientists have become important to political decision-makers as a resource. As a result of this political calculus, oceanographers find themselves defending their freedom to conduct fundamental research in large portions of the world's oceans and marginal seas.

    Kernelized information bottleneck leads to biologically plausible 3-factor Hebbian learning in deep networks

    The state-of-the-art machine learning approach to training deep neural networks, backpropagation, is implausible for real neural networks: neurons need to know their outgoing weights; training alternates between a forward pass (computation) and a backward pass (learning); and the algorithm needs a large amount of labeled data. Biologically plausible approximations to backpropagation, such as feedback alignment, solve the weight transport problem, but not the other two. Thus, fully biologically plausible learning rules have so far remained elusive. Here we present a family of learning rules that does not suffer from any of these problems. It is motivated by the information bottleneck principle (extended with kernel methods), in which networks learn to squeeze as much information as possible out of the input without sacrificing prediction of the output. The resulting rules have a 3-factor Hebbian structure: they require pre- and post-synaptic firing rates and a global error signal - the third factor - that can be supplied by a neuromodulator. Moreover, they do not require precise labels; instead, they rely on the similarity between the desired outputs. They thus solve all three implausibility issues of backpropagation. Moreover, to obtain good performance on hard problems while retaining biologically plausible learning rules, our rules need divisive normalization - a known feature of biological networks. Finally, simulations show that our rule performs nearly as well as backpropagation on image classification tasks.
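    As a hedged illustration of the generic 3-factor structure described above (not the paper's kernelized information-bottleneck rule), the sketch below multiplies pre-synaptic rates, divisively normalized post-synaptic rates, and a global scalar signal g; all names, sizes, and constants are assumptions made for the example:

        import numpy as np

        rng = np.random.default_rng(1)

        def divisive_normalize(r, sigma=1.0):
            # Divisive normalization: each rate is scaled by the pooled activity.
            return r / (sigma + r.sum())

        # Illustrative layer sizes; nothing here is taken from the paper.
        n_pre, n_post = 20, 5
        W = 0.1 * rng.standard_normal((n_post, n_pre))

        def three_factor_update(W, r_pre, g, lr=0.01):
            # One 3-factor Hebbian step: pre-synaptic rate (factor 1)
            # x post-synaptic rate (factor 2) x global modulatory signal g (factor 3).
            r_post = divisive_normalize(np.maximum(W @ r_pre, 0.0))
            return W + lr * g * np.outer(r_post, r_pre)

        r_pre = rng.random(n_pre)   # pre-synaptic firing rates
        g = 0.5                     # global error signal, e.g. from a neuromodulator
        W = three_factor_update(W, r_pre, g)

    The update is local in the sense the abstract emphasizes: each synapse sees only its own pre- and post-synaptic rates plus the shared scalar g, with divisive normalization supplying the pooled activity.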